Google Still Doesn't Get It

Image prompted by the author.

Google is now in full damage-control mode following last week's embarrassing revelations about the Woke bias baked deep into the company's Gemini large language model — aka, artificial intelligence.

Google CEO Sundar Pichai wrote to employees in an internal memo, "I know that some of its responses have offended our users and shown bias — to be clear, that's completely unacceptable and we got it wrong."

Pichai's memo doesn't address the root issue. The problem isn't that users were offended. The problem isn't that Gemini has "shown bias," although that phrase is a tell. The problem is that Gemini is biased, and that the rot goes to the heart of its code.

To Pichai, the problem seems to be that we're offended because Gemini's bias was shown — that we could see it.

Another Google executive, Prabhakar Raghavan, explained in a blog post, "If you ask for a picture of football players, or someone walking a dog, you may want to receive a range of people. You probably don't just want to only receive images of people of just one type of ethnicity (or any other characteristic)."

So far, so good. Raghavan then wrote that Gemini "failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive."

But that's not what users saw with their own eyes. What we saw was an undeniable bias that ran only one way. The so-called "anodyne prompts" all involved conservatives or Republicans, whom Gemini was happy to compare to Hitler while stubbornly refusing to make the same comparisons about progressives or Democrats, and white men were nearly erased altogether.

As I wrote last week when the Gemini image generator's biases were first revealed, Google didn't have to invent some Never Produce White Guys algorithm for the image generator. It was simply producing results based on the biases already built into the Gemini chatbot.

Google has a trust problem, and it has blown up into the public consciousness in a way that pictures make undeniable: in this case, pictures of black Vikings, Native American Founding Fathers, and Asian soldiers in a French WWI trench (see above).

Whether it's a search engine or an LLM like Gemini, Google's products are black boxes users can't see into. We don't understand how they work — and in the case of LLMs, their creators don't even fully understand how they produce the results they do. Because we can't see (or understand, even if we could see) how the black box works, the company must constantly engender trust that after you type a request into the box, what pops out of it will be the best possible result.

Conservatives and others have long complained that Google's black boxes were no longer trustworthy — that what came out of the black box wasn't the best info you needed but what Google wanted you to see.

"Gemini wasn't built to serve different users," I concluded last week. "It was built by Google to 'fix' problematic attitudes like yours and mine."

Nothing I've read from Pichai or Raghavan this week makes me believe their attitude about "correcting" ours has changed.
